CMOS-Memristor Dendrite Threshold Circuits
Non-linear neuron models overcome the limitations of linear binary neuron models, which cannot compute linearly non-separable functions such as XOR. While several biologically plausible models based on dendrite thresholds have been reported in previous studies, the hardware implementation of such non-linear neuron models remains an open problem. In this paper, we propose a circuit design implementing the logical dendrite non-linearity response of the dendrite spike and saturation types. The proposed dendrite cells are used to build an XOR circuit and an intensity detection circuit that consist of different combinations of dendrite cells with saturating and spiking responses. The dendrite cells are designed using a set of memristors, Zener diodes, and CMOS NOT gates. The circuits are designed, analyzed and verified on circuit boards.
Comment: Zhanbossinov, K. Smagulova, A. P. James, CMOS-Memristor Dendrite Threshold Circuits, 2016 IEEE APCCAS, Jeju, Korea, October 25-28, 2016
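To illustrate why the dendritic non-linearity lifts the XOR limitation, here is a minimal behavioural sketch (not the paper's circuit; branch responses and the threshold value are assumptions): a saturating branch and a high-threshold spiking branch combined at the soma compute XOR, which no single linear threshold unit can.

```python
def saturating(u):
    # Saturation-type branch: output grows with the input, then clips at 1
    return min(max(u, 0.0), 1.0)

def spiking(u, theta=2.0):
    # Spike-type branch: all-or-none response once input reaches theta
    return 1.0 if u >= theta else 0.0

def dendritic_xor(x1, x2):
    s = x1 + x2
    # Soma subtracts the high-threshold spiking branch (AND-like)
    # from the saturating branch (OR-like): active iff exactly one input is on
    soma = saturating(s) - spiking(s)
    return 1 if soma >= 0.5 else 0

for a in (0, 1):
    for b in (0, 1):
        print(a, b, dendritic_xor(a, b))
```

A single linear unit would need `w1*x1 + w2*x2 >= t` to hold for (0,1) and (1,0) but fail for (1,1), which is impossible; the non-linear branch responses break that constraint.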
Thermal Heating in ReRAM Crossbar Arrays: Challenges and Solutions
The increasing popularity of deep-learning-powered applications raises the issue of the vulnerability of neural networks to adversarial attacks. In other words, hardly perceptible changes in input data lead to output errors in a neural network, hindering its utilization in applications that involve security-critical decisions. A number of previous works have already thoroughly evaluated the most commonly used configuration, Convolutional Neural Networks (CNNs), against different types of adversarial attacks. Moreover, recent works have demonstrated the transferability of some adversarial examples across different neural network models. This paper studies the robustness of new emerging models, such as SpinalNet-based neural networks and Compact Convolutional Transformers (CCT), on the image classification problem of the CIFAR-10 dataset. Each architecture was tested against four white-box attacks and three black-box attacks. Unlike the VGG and SpinalNet models, the attention-based CCT configuration demonstrated a large span between strong robustness and vulnerability to adversarial examples. Finally, a study of transferability between the VGG, VGG-inspired SpinalNet and pretrained CCT 7/3x1 models was conducted. It was shown that despite the high effectiveness of an attack on a certain individual model, this does not guarantee transferability to other models.
Comment: 18 pages
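The white-box attacks above share one mechanism: perturb the input along the sign of the loss gradient. A minimal FGSM-style sketch on a hand-rolled logistic model (the weights and epsilon budget are illustrative assumptions, not the paper's networks):

```python
import math

# Illustrative fixed logistic model: P(class=1 | x) = sigmoid(W·x + B)
W = [2.0, -1.0]
B = 0.0

def predict(x):
    z = sum(w * xi for w, xi in zip(W, x)) + B
    return 1 / (1 + math.exp(-z))

def fgsm(x, y, eps=0.3):
    # For logistic regression with cross-entropy loss, the gradient of the
    # loss w.r.t. the input is (p - y) * W; FGSM steps along its sign.
    p = predict(x)
    grad = [(p - y) * w for w in W]
    return [xi + eps * math.copysign(1.0, g) for xi, g in zip(x, grad)]

x = [1.0, 0.0]            # clean input, true label 1
x_adv = fgsm(x, 1)
print(predict(x), predict(x_adv))  # confidence in the true class drops
```

Black-box attacks pursue the same goal without gradient access, e.g. by querying the model and estimating the gradient numerically.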
Design of CMOS-memristor Circuits for LSTM architecture
Long short-term memory (LSTM) is a well-known architecture for building recurrent neural networks (RNNs), useful in the sequential processing of data in applications such as natural language processing. The near-sensor hardware implementation of LSTM is challenging due to its large parallelism and complexity. We propose a 0.18 μm CMOS, GST memristor LSTM hardware architecture for near-sensor processing. The proposed system is validated on a forecasting problem based on a Keras model.
DESIGN OF CMOS-MEMRISTOR CIRCUIT OF LSTM ARCHITECTURE
The growing amount of data, the end of Moore's law, and the need for machines with human intelligence have dictated several new concepts in computing and chip design. The physical limitations of Complementary Metal-Oxide-Semiconductor (CMOS) transistors and the von Neumann bottleneck problem showed that there is a need for the development of in-memory computing devices using beyond-CMOS technologies. The architecture of the long short-term memory (LSTM) neural network makes it an ideal candidate for modern computing systems. The recurrent connections and built-in memory of the LSTM network also allow it to process different types of data, including data with temporal features and dependencies.
The realization of LSTM, and of other artificial neural networks (ANNs), implies a large amount of parallel computation. Therefore, in most cases, their training and inference are implemented on modern computing systems with the help of graphics processing units (GPUs). In addition, there are several available solutions for energy- and area-efficient inference of neural networks based on field-programmable gate array (FPGA) and application-specific integrated circuit (ASIC) platforms in both the digital and analog domains.
In 2008, the discovery of a new device called the 'memristor', which acts as an artificial synapse, brought attention to the development of memristive artificial neural networks (ANNs). Due to their nanoscale size and non-volatile nature, memristor crossbar arrays (MCAs) allow several orders of magnitude faster dot-product multiplication while requiring a smaller area and lower energy consumption.
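The crossbar's speed advantage comes from computing a whole matrix-vector product in one analog step: input voltages are applied on the rows, and by Ohm's and Kirchhoff's laws each column current sums to I_j = Σ_i V_i · G_ij. A minimal numeric sketch (conductance and voltage values are assumed, for illustration only):

```python
def crossbar_mvm(G, V):
    # G: conductance matrix (rows x columns, siemens); V: row voltages (volts)
    # Each column current is the dot product of V with that column of G.
    cols = len(G[0])
    return [sum(V[i] * G[i][j] for i in range(len(G))) for j in range(cols)]

G = [[1e-4, 2e-4],
     [3e-4, 1e-4]]      # example device conductances
V = [0.5, 0.2]          # example input voltages
print(crossbar_mvm(G, V))   # column currents in amperes
```

In hardware the loop disappears: all the multiply-accumulates happen simultaneously in the analog domain, which is the source of the claimed speed and energy gains.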
Recent successful works in which memristors were used as a dot-product engine include "A convolutional neural network accelerator with in-situ analog arithmetic in crossbars" (ISAAC) and "A programmable ultra-efficient memristor-based accelerator for machine learning inference" (PUMA). Nevertheless, the training of ANNs on FPGAs and ASICs remains a challenging problem. Therefore, the majority of memristive platforms are proposed only for the acceleration of neural networks with pre-trained parameters.
In this thesis work, the design of an analog CMOS-memristor accelerator implementing a long short-term memory (LSTM) recurrent neural network at the edge is proposed. The circuit design of a single LSTM unit consists of two main parts: 1) a dot-product engine based on a memristor crossbar array using a "one weight, two memristors" scheme; and 2) CMOS circuit blocks used to realize the arithmetic and non-linear functions within the LSTM unit. The proposed design was validated on machine learning problems such as prediction and classification. The performance of the analog LSTM circuit design was compared with other types of neural networks and neuromorphic systems, including a single perceptron, FNN, DNN, and modified HTM. In addition, analyses of memristor state variability in hybrid CNN-LSTM and CNN implementations for image classification were performed successfully.
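The "one weight, two memristors" scheme mentioned above is needed because a conductance is always positive, while trained weights are signed: each weight is stored as a pair (G+, G-) and recovered as a scaled difference. A sketch under assumed device parameters (G_MIN, G_MAX and the scale K are illustrative, not the thesis's values):

```python
G_MIN, G_MAX = 1e-6, 1e-4        # assumed device conductance range, siemens
K = 1.0 / (G_MAX - G_MIN)        # scale so weights with |w| <= 1 fit the range

def weight_to_pair(w):
    # Positive weights go on the G+ device, negative on the G- device;
    # the other device rests at G_MIN.
    if w >= 0:
        return G_MIN + w / K, G_MIN
    return G_MIN, G_MIN - w / K

def pair_to_weight(gp, gm):
    # Subtracting the two column currents implements w = K * (G+ - G-)
    return K * (gp - gm)

gp, gm = weight_to_pair(-0.4)
print(pair_to_weight(gp, gm))    # recovers -0.4 (up to float error)
```

In the crossbar this subtraction is done in analog, by routing the two columns into a differential current-sensing stage.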
A survey on LSTM memristive neural network architectures and applications
Recurrent neural networks (RNNs) have been found to be an effective tool for approximating dynamic systems dealing with time- and order-dependent data such as video, audio and others. Long short-term memory (LSTM) is a recurrent neural network with a state memory and a multilayer cell structure. Hardware acceleration of LSTM using memristor circuits is an emerging topic of study. In this work, we look at the history of the LSTM neural network and the reasons why it was developed. We provide a tutorial survey of the existing LSTM methods and highlight the recent developments in memristive LSTM architectures.
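To make the "state memory and multilayer cell structure" concrete, here is a minimal single-step LSTM cell in scalar form (the standard gate equations; the weight values are illustrative assumptions):

```python
import math

def sigmoid(z):
    return 1 / (1 + math.exp(-z))

def lstm_step(x, h_prev, c_prev, p):
    # p maps each gate name to its (input weight, recurrent weight, bias)
    i = sigmoid(p['i'][0]*x + p['i'][1]*h_prev + p['i'][2])    # input gate
    f = sigmoid(p['f'][0]*x + p['f'][1]*h_prev + p['f'][2])    # forget gate
    o = sigmoid(p['o'][0]*x + p['o'][1]*h_prev + p['o'][2])    # output gate
    g = math.tanh(p['g'][0]*x + p['g'][1]*h_prev + p['g'][2])  # candidate
    c = f * c_prev + i * g       # state memory: gated keep + gated write
    h = o * math.tanh(c)         # hidden output fed back at the next step
    return h, c

params = {k: (0.5, 0.5, 0.0) for k in 'ifog'}
h, c = lstm_step(1.0, 0.0, 0.0, params)
```

In a memristive realization, each of the four gate pre-activations is a dot product and thus maps naturally onto a crossbar column; the sigmoids, tanh and element-wise products are what the surrounding CMOS blocks must provide.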
Generalised Analog LSTMs Recurrent Modules for Neural Computing
Funding Information: The authors acknowledge the funding support provided through Maker Village, Kochi by a Bharat Electronics Limited CSR grant through the Defence Accelerator Program of iDEX with reference number #2021/01/BEL. Publisher Copyright: © 2021 Adam, Smagulova and James.
The human brain can be considered a complex dynamic and recurrent neural network. There are several models for neural networks of the human brain that cover sensory to cortical information processing. The large majority of these models include feedback mechanisms that are hard to formalise for realistic applications. Recurrent neural networks and long short-term memory (LSTM) networks are inspired by these neuronal feedback networks. LSTM networks prevent the vanishing and exploding gradient problems faced by simple recurrent neural networks and have the ability to process order-dependent data. Such recurrent neural units can be replicated in hardware and interfaced with analog sensors for efficient and miniaturised implementation of intelligent processing. The implementation of analog memristive LSTM hardware is an open research problem and can offer the advantages of continuous-domain analog computing with a relatively low on-chip area compared with a digital-only implementation. Designed for solving time-series prediction problems, the overall architectures and circuits were tested with TSMC 0.18 μm CMOS technology and hafnium-oxide (HfO2) based memristor crossbars. Extensive circuit-based SPICE simulations with over 3,500 runs (inference only) and 300 system-level simulations (training and inference) were performed to benchmark the system performance of the proposed implementations.
The analysis includes Monte Carlo simulations of the variability of the memristors' conductance and of crossbar parasitics, in which the non-idealities of hybrid CMOS-memristor circuits are taken into account.
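A Monte Carlo variability study of this kind can be sketched in a few lines: perturb each conductance with a random spread and record the worst relative deviation of the crossbar output (the 5% spread, the conductances and the run count are assumptions, not the paper's settings):

```python
import random

def mvm(G, V):
    # Ideal crossbar output: column currents I_j = sum_i V_i * G_ij
    return [sum(V[i] * G[i][j] for i in range(len(G)))
            for j in range(len(G[0]))]

def monte_carlo(G, V, sigma=0.05, runs=1000, seed=0):
    random.seed(seed)
    nominal = mvm(G, V)
    worst = 0.0
    for _ in range(runs):
        # Multiplicative Gaussian spread on every device conductance
        Gp = [[g * random.gauss(1.0, sigma) for g in row] for row in G]
        out = mvm(Gp, V)
        worst = max(worst, max(abs(a - b) / abs(b)
                               for a, b in zip(out, nominal)))
    return worst   # worst relative output error seen over all runs

G = [[1e-4, 2e-4], [3e-4, 1e-4]]
err = monte_carlo(G, [0.5, 0.2])
```

The same loop structure extends to parasitics by perturbing an augmented circuit model instead of G alone.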
Low Power Near-sensor Coarse to Fine XOR based Memristive Edge Detection
https://ieeexplore.ieee.org/document/8649972
In this paper, we propose an XOR-based memristive edge detector circuit that is integrated into a near-sensor log-linear CMOS pixel. Memristor threshold logic was used to design NAND gates, which serve as the building block for XOR gates. To validate the functionality of the proposed circuit, a hardware simulation of the logic gates with a pixel pair was conducted using TSMC 0.18 μm technology, together with a system-level simulation of the proposed circuit using SPICE models. The proposed method operates at low power and takes a small area on chip. The power consumption of one pixel is 1.16 μW with a total area of 36.72 μm², excluding the photosensing component; the power consumption of the NAND circuit is 1.11 pW with a total area of 32.4 μm².
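The logic behind the detector is that an edge exists wherever two neighbouring binarized pixels disagree, i.e. their XOR is 1; building XOR from NAND gates mirrors the memristor-threshold-logic construction. A behavioural sketch (the binarization threshold is an assumed value, not the pixel circuit's):

```python
def nand(a, b):
    # Building block realized with memristor threshold logic in the paper
    return 1 - (a & b)

def xor(a, b):
    # Standard four-NAND composition of XOR
    t = nand(a, b)
    return nand(nand(a, t), nand(b, t))

def edge_row(pixels, thresh=128):
    # Binarize, then mark every position where adjacent pixels disagree
    bits = [1 if p >= thresh else 0 for p in pixels]
    return [xor(bits[i], bits[i + 1]) for i in range(len(bits) - 1)]

print(edge_row([10, 20, 200, 210, 30]))  # marks the dark-to-bright transitions
```

The "coarse to fine" refinement in the title would then re-examine only the marked positions at higher resolution; that stage is not sketched here.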
Hardware implementation of memristor-based artificial neural networks
Abstract: Artificial Intelligence (AI) is currently experiencing a bloom driven by deep learning (DL) techniques, which rely on networks of connected simple computing units operating in parallel. The low communication bandwidth between memory and processing units in conventional von Neumann machines does not support the requirements of emerging applications that rely extensively on large sets of data. More recent computing paradigms, such as high parallelization and near-memory computing, help alleviate the data communication bottleneck to some extent, but paradigm-shifting concepts are required. Memristors, a novel beyond-complementary-metal-oxide-semiconductor (CMOS) technology, are a promising choice for memory devices due to their unique intrinsic device-level properties, enabling both storing and computing with a small, massively parallel footprint at low power. Theoretically, this translates directly into a major boost in energy efficiency and computational throughput, but various practical challenges remain. In this work we review the latest efforts toward hardware-based memristive artificial neural networks (ANNs), describing in detail the working principles of each block and the different design alternatives, with their respective advantages and disadvantages, as well as the tools required for accurate estimation of performance metrics. Ultimately, we aim to provide a comprehensive protocol of the materials and methods involved in memristive neural networks, both for those aiming to start working in this field and for experts looking for a holistic approach.